1. Purpose

The purpose of this policy document is to provide a framework for the use of Generative Artificial Intelligence (GenAI) large language models, such as Copilot, Gemini, the GPT series, Grok, and other similar tools, by Council employees, Councillors, contractors, developers, vendors, temporary staff, consultants or other third parties, hereinafter referred to as ‘users’.

This policy is designed to ensure that the use of GenAI is ethical, complies with all applicable laws, regulations, and council policies, and complements the Council’s existing information and security policies.

The pace of development and application of GenAI is such that this policy will be kept under continual review and updated as the technology evolves.

2. Use

This policy applies to all users with access to GenAI, whether through council-owned devices or personal devices used for council activities. These tools can also be embedded in other software; for example, Copilot is integrated across Microsoft 365 apps.

GenAI must be used in a manner that promotes fairness, avoids bias, prevents discrimination, and promotes equal treatment, and in a way that contributes positively to the Council’s goals and values.

Users may use GenAI for work-related purposes subject to adherence to this policy. This includes tasks such as generating text, images, or other content for reports, emails, presentations, and customer service communications. Such tasks must not involve the processing of personal data.

If users want to explore the possibility of using AI for any other purpose, then they must contact the Projects Team. 

What are the rules that MUST be followed?

1.    Do not input personally identifiable information into GenAI tools.

2.    Do not input any personal data about service users even if it is not personally identifiable (including descriptions of any safeguarding incidents).

3.    Do not use GenAI for non-rule-based automated decision-making.

4.    Do not input any commercially sensitive local authority data (e.g. payment card data, data that affects the operational security of the organisation, etc.).

5.    Do not use a personal account for work related purposes.

6.    Users must read the Terms and Conditions of the relevant AI tool before using it for council activities.

7.    Where the GenAI tool cites the sources it has obtained information from, these should be referenced in the user’s document/work.

8.    The user is responsible for evaluating the output generated by the AI. The user must check for accuracy, coherence, and appropriateness.

9.    Be clear with other staff and service users (your Privacy Notice may need amending) when you have used GenAI as part of your work if:

a.    You directly quote or use a significant proportion of the GenAI output.

b.    You use the GenAI output to meaningfully inform a decision you make.

c.     You need to maintain trust with particular groups you are communicating with (residents, political or senior leaders, etc.).

10.  Follow any legal and regulatory requirements of the tool you are using, including any policies set by SBC.

How to use GenAI

Currently, GenAI is most commonly used to support individual tasks and act as a personal assistant. For example, GenAI can help you to:

·         create images and videos from scratch by simply telling a tool what you want to see.

·         come up with lots of new ideas in seconds - for example, coming up with icebreakers for meetings.

GenAI can help you be more productive by:

·         creating first drafts of an email or document for you to finish writing, and then finding ways to improve the quality of your writing once you have done so.

·         quickly finding sources of information.

·         simplifying complex topics into understandable information.

·         summarising large text documents.

Particular attention should be given to governance, third parties (vendor practices), copyright, accuracy, confidentiality, disclosure, and integration with other tools, as set out in the sections below.

 

2.1 Governance

Users must declare their responsible and ethical usage of GenAI by completing the ICT Personal Commitment Statement form. Users must also agree to responsible and ethical use of GenAI when accessing the network during user logon. Users must take responsibility for their use of AI by ensuring they verify information, use it ethically, and remain aware of potential biases. It is crucial to protect privacy and maintain human oversight to prevent misuse and unintended consequences. The Project Team has a risk evaluation document that addresses the risks associated with using GenAI.

2.2 Third Parties

Any use of GenAI technology in pursuit of Council activities should be done with full acknowledgement of the policies, practices, terms and conditions of developers or vendors of the relevant AI software tool.

 

2.3 Copyright 

Users must adhere to copyright laws when utilising GenAI. It is prohibited to use GenAI to generate content that infringes upon the intellectual property rights of others, including but not limited to copyrighted material. If a user is unsure whether a particular use of GenAI constitutes copyright infringement, they should contact Legal before using GenAI.

2.4 Accuracy

GenAI may inadvertently produce content that appears authentic but lacks factual basis, known as “hallucinations”. All information generated by GenAI must be reviewed and edited for accuracy prior to use. Users of GenAI are responsible for reviewing output and are accountable for ensuring the accuracy of GenAI generated output before use/release. If a user has any doubt about the accuracy of information generated by GenAI, they should not use GenAI.

2.5 Confidentiality

Confidential and personal information must not be entered into a GenAI tool, as information may enter the public domain. Users must follow all applicable data privacy laws and organisational policies when using GenAI.  If a user has any doubt about the confidentiality of information, they should not use GenAI.

2.6 Climate Change Impact

Generative AI has the potential for significant environmental impact through increased energy demand and consumption, and therefore increased reliance on energy supply. GenAI models consume large amounts of energy during the training and inference phases, and substantial additional energy is required to cool the processors that run them. As these models grow in size and complexity, and demand for AI services rises, energy demands will continue to increase.

As a Local Authority, there is a duty to report on our Greenhouse Gas (GHG) emissions. We currently report on Scope 1 emissions (emissions that a company makes directly, e.g. burning fossil fuels and running fleet vehicles) and Scope 2 emissions (emissions made indirectly, e.g. from purchased electricity). However, there is a drive to also report Scope 3 emissions (all the associated emissions that an organisation is indirectly responsible for, e.g. buying products from suppliers which then produce emissions when customers use them), in which AI is included. Although there is uncertainty around the future environmental impact of AI, with the possibility that companies will invest in clean energy sources to power data centres and thereby offset some of the projected increase in power demand, AI carries a carbon footprint which needs to be accounted for.

2.7 Social Impact and Equality

Users must be aware of how the use of GenAI may impact different groups of people in different ways, as it may have inherent social bias or have been trained on stereotypes. It may reflect inappropriate cultural values or display sensitive content. For example, GenAI must not be allowed to solely determine which customers have access to services. Human involvement remains essential in decision-making, and the Council should establish an appeal process for any automated or AI-informed decisions.

2.8 Ethical Use

GenAI must be used ethically and in compliance with all applicable legislation, regulations, and organisational policies. Users must not use GenAI to generate content that is discriminatory, offensive, or inappropriate. If there are any doubts about the appropriateness of using GenAI in a particular situation, users should consult their supervisor or the Information Governance Team.

2.9 Disclosure

Content produced via GenAI must be identified and disclosed as containing GenAI-generated information.

Footnote example:  Note: This document contains content generated by Artificial Intelligence (AI). AI generated content has been reviewed by the author for accuracy and edited/revised where necessary. The author takes responsibility for this content.

2.10 Integration with other tools

An Application Programming Interface (API) allows communication between different software systems. Plugins are additional components that enhance or add features to existing software. Both enable other services, such as email, Teams, or search engines, to access GenAI and extend their functionality, improving automation and productivity. Users should consider the following guidance to reduce the frequency of unsafe content produced by GenAI tools; an illustrative sketch follows the lists below:

·         Adversarial testing – test your product over a wide range of inputs and outputs to see if responses start to go “off topic”.

·         Human in the loop (HITL) – wherever possible, a human should review outputs before they are used in practice. Humans should be aware of limitations of the system.

·         “Know your customer” (KYC) – users should generally need to log in to access a service.

·         Allow users to report issues – users should have an easily accessible way to report improper functionality or other concerns about software behaviour. The reporting should be monitored by a human and responded to as appropriate.

·         Understand and communicate limitations – consider whether the GenAI tool being used is fit for your purpose as GenAI may not be suitable for every use.

·         End-user IDs – including a user ID in requests can help monitor and detect abuse of the software.

 API and plugin tools must be rigorously tested for:

·         Moderation – to ensure the model handles hateful, discriminatory, threatening, and similar inputs appropriately.

·         Factual responses – provide a ground truth for the API and review responses against it.
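As an illustration of several of the points above (moderation, end-user IDs, and human-in-the-loop review), the sketch below shows how an integration might call a GenAI service. It is a minimal, non-authoritative example: it assumes a Python integration using the OpenAI client library, and the function name, model name, and staff_id parameter are hypothetical and for illustration only. Any real integration must follow the approved vendor’s documentation and this policy, and remains subject to the risk assessment described in section 3.

# Illustrative sketch only (not council-approved code): combines a vendor
# moderation check, an end-user ID, and human-in-the-loop review.
# Assumes a Python integration using the OpenAI client library; the function
# name, model name, and staff_id parameter are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # API key is read from the environment, never hard-coded

def draft_reply(prompt: str, staff_id: str) -> str | None:
    # Moderation: reject prompts flagged by the vendor's moderation endpoint.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return None  # log and report through the agreed channel instead

    # End-user ID: passing a per-user identifier helps monitor and detect
    # abuse of the integration, as recommended above.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name only
        messages=[{"role": "user", "content": prompt}],
        user=staff_id,
    )

    # Human in the loop: return a draft for a person to review and edit;
    # the output must never be sent or published automatically.
    return response.choices[0].message.content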

3. Risks

Use of GenAI carries inherent risks. A comprehensive risk assessment must be conducted for any project or process where use of GenAI is proposed. The risk assessment should consider potential impacts including legal compliance; bias and discrimination; security (including technical protections and security certifications); and data sovereignty and protection.

3.1 Legal compliance

Data entered into GenAI tools may enter the public domain. This can release non-public information, breach regulatory requirements or service user and vendor contracts, or compromise intellectual property. Any release of private/personal information could result in a breach of relevant data protection laws. Use of GenAI to compile content may also infringe regulations for the protection of intellectual property rights. Users should ensure that their use of any GenAI complies with all applicable laws and regulations and with council policies. Any potential or alleged breaches of this policy should be reported to the council’s Information Governance Team or senior management. Failure to report any potential or alleged breaches may result in disciplinary action, in accordance with the council’s Human Resources policies and procedures.

 

3.2 Bias and discrimination

GenAI may make use of and generate biased, discriminatory, or offensive content. This can occur as a result of information entered by users or be an inherent fault of the tool because it has been trained on stereotypes. Users must use GenAI responsibly and ethically, in compliance with council policies and applicable laws and regulations.

3.3 Security

GenAI may store sensitive data and information, which could be at risk of being breached or hacked. The council must assess technical protections and security certification of GenAI before use. If a user has any doubt about the security of information input into GenAI, they should not use GenAI.

3.4 Data sovereignty and protection

While a GenAI platform may be hosted internationally, under data sovereignty rules information created or collected in the originating country remains under the jurisdiction of that country’s laws. The reverse also applies: if information is sourced from GenAI hosted overseas, the laws of the source country regarding its use and access may apply. GenAI service providers should be assessed for data sovereignty practices by any organisation wishing to use their GenAI.

4. Compliance

Any violations of this policy should be reported to the council’s Information Governance Team or senior management. Failure to comply with this policy may result in disciplinary action, in accordance with the council’s Human Resources policies and procedures.

5. Review

This policy will be reviewed periodically and updated as necessary to ensure continued compliance with all applicable legislation, regulations, and organisational policies. 

6. Acknowledgment

By using GenAI, users acknowledge that they have read and understood these guidelines, including the risks associated with the use of GenAI.